[mlir][spirv] Use assemblyFormat to define groupNonUniform op assembly #115661
See llvm#73359.

Declarative `assemblyFormat` ODS is more concise and requires less boilerplate than implementing the C++ parse/print interfaces. Changes:
* Updates the ops defined in `SPIRVNonUniformOps.td` and `SPIRVGroupOps.td` to use `assemblyFormat`.
* Removes the print/parse implementations from `GroupOps.cpp`, which are now generated from `assemblyFormat`.
* Updates tests to the new format (largely using `<operand>` in place of `"operand"` and spelling out complete type information).
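As a quick illustration of the syntax change, here is a before/after sketch of the textual assembly, taken from the examples updated in the diff below (`%scalar`, `%vector`, and `%four` are assumed to be defined elsewhere):

```mlir
// Before: enum operands printed as quoted strings; only the value type is written
%0 = spirv.GroupNonUniformFAdd "Workgroup" "Reduce" %scalar : f32
%1 = spirv.GroupNonUniformFAdd "Subgroup" "ClusteredReduce" %vector cluster_size(%four) : vector<4xf32>

// After: enum operands in angle brackets; operand and result types spelled out
%0 = spirv.GroupNonUniformFAdd <Workgroup> <Reduce> %scalar : f32 -> f32
%1 = spirv.GroupNonUniformFAdd <Subgroup> <ClusteredReduce> %vector cluster_size(%four) : vector<4xf32>, i32 -> vector<4xf32>
```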
@llvm/pr-subscribers-mlir-spirv @llvm/pr-subscribers-mlir

Author: Yadong Chen (hahacyd)

Changes: see #73359. Declarative `assemblyFormat` ODS is more concise and requires less boilerplate than implementing the C++ parse/print interfaces.
Patch is 51.11 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/115661.diff 7 Files Affected:
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVGroupOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVGroupOps.td
index dd25fbbce14b9a..a8743b196bfe77 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVGroupOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVGroupOps.td
@@ -661,6 +661,12 @@ def SPIRV_INTELSubgroupBlockReadOp : SPIRV_IntelVendorOp<"SubgroupBlockRead", []
let results = (outs
SPIRV_Type:$value
);
+
+ let hasCustomAssemblyFormat = 0;
+
+ let assemblyFormat = [{
+ $ptr attr-dict `:` type($ptr) `->` type($value)
+ }];
}
// -----
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVNonUniformOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVNonUniformOps.td
index a32f625ae82112..a1b866387e2ec0 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVNonUniformOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVNonUniformOps.td
@@ -26,7 +26,13 @@ class SPIRV_GroupNonUniformArithmeticOp<string mnemonic, Type type,
let results = (outs
SPIRV_ScalarOrVectorOf<type>:$result
- );
+ );
+
+ let hasCustomAssemblyFormat = 0;
+
+ let assemblyFormat = [{
+ $execution_scope $group_operation $value (`cluster_size``(` $cluster_size^ `)`)? attr-dict `:` type($value) (`,` type($cluster_size)^)? `->` type(results)
+ }];
}
// -----
@@ -318,24 +324,14 @@ def SPIRV_GroupNonUniformFAddOp : SPIRV_GroupNonUniformArithmeticOp<"GroupNonUni
<!-- End of AutoGen section -->
- ```
- scope ::= `"Workgroup"` | `"Subgroup"`
- operation ::= `"Reduce"` | `"InclusiveScan"` | `"ExclusiveScan"` | ...
- float-scalar-vector-type ::= float-type |
- `vector<` integer-literal `x` float-type `>`
- non-uniform-fadd-op ::= ssa-id `=` `spirv.GroupNonUniformFAdd` scope operation
- ssa-use ( `cluster_size` `(` ssa_use `)` )?
- `:` float-scalar-vector-type
- ```
-
#### Example:
```mlir
%four = spirv.Constant 4 : i32
%scalar = ... : f32
%vector = ... : vector<4xf32>
- %0 = spirv.GroupNonUniformFAdd "Workgroup" "Reduce" %scalar : f32
- %1 = spirv.GroupNonUniformFAdd "Subgroup" "ClusteredReduce" %vector cluster_size(%four) : vector<4xf32>
+ %0 = spirv.GroupNonUniformFAdd <Workgroup> <Reduce> %scalar : f32 -> f32
+ %1 = spirv.GroupNonUniformFAdd <Subgroup> <ClusteredReduce> %vector cluster_size(%four) : vector<4xf32>, i32 -> vector<4xf32>
```
}];
@@ -378,24 +374,14 @@ def SPIRV_GroupNonUniformFMaxOp : SPIRV_GroupNonUniformArithmeticOp<"GroupNonUni
<!-- End of AutoGen section -->
- ```
- scope ::= `"Workgroup"` | `"Subgroup"`
- operation ::= `"Reduce"` | `"InclusiveScan"` | `"ExclusiveScan"` | ...
- float-scalar-vector-type ::= float-type |
- `vector<` integer-literal `x` float-type `>`
- non-uniform-fmax-op ::= ssa-id `=` `spirv.GroupNonUniformFMax` scope operation
- ssa-use ( `cluster_size` `(` ssa_use `)` )?
- `:` float-scalar-vector-type
- ```
-
#### Example:
```mlir
%four = spirv.Constant 4 : i32
%scalar = ... : f32
%vector = ... : vector<4xf32>
- %0 = spirv.GroupNonUniformFMax "Workgroup" "Reduce" %scalar : f32
- %1 = spirv.GroupNonUniformFMax "Subgroup" "ClusteredReduce" %vector cluster_size(%four) : vector<4xf32>
+ %0 = spirv.GroupNonUniformFMax <Workgroup> <Reduce> %scalar : f32 -> f32
+ %1 = spirv.GroupNonUniformFMax <Subgroup> <ClusteredReduce> %vector cluster_size(%four) : vector<4xf32>, i32 -> vector<4xf32>
```
}];
@@ -438,24 +424,14 @@ def SPIRV_GroupNonUniformFMinOp : SPIRV_GroupNonUniformArithmeticOp<"GroupNonUni
<!-- End of AutoGen section -->
- ```
- scope ::= `"Workgroup"` | `"Subgroup"`
- operation ::= `"Reduce"` | `"InclusiveScan"` | `"ExclusiveScan"` | ...
- float-scalar-vector-type ::= float-type |
- `vector<` integer-literal `x` float-type `>`
- non-uniform-fmin-op ::= ssa-id `=` `spirv.GroupNonUniformFMin` scope operation
- ssa-use ( `cluster_size` `(` ssa_use `)` )?
- `:` float-scalar-vector-type
- ```
-
#### Example:
```mlir
%four = spirv.Constant 4 : i32
%scalar = ... : f32
%vector = ... : vector<4xf32>
- %0 = spirv.GroupNonUniformFMin "Workgroup" "Reduce" %scalar : f32
- %1 = spirv.GroupNonUniformFMin "Subgroup" "ClusteredReduce" %vector cluster_size(%four) : vector<4xf32>
+ %0 = spirv.GroupNonUniformFMin <Workgroup> <Reduce> %scalar : f32 -> f32
+ %1 = spirv.GroupNonUniformFMin <Subgroup> <ClusteredReduce> %vector cluster_size(%four) : vector<4xf32>, i32 -> vector<4xf32>
```
}];
@@ -495,24 +471,14 @@ def SPIRV_GroupNonUniformFMulOp : SPIRV_GroupNonUniformArithmeticOp<"GroupNonUni
<!-- End of AutoGen section -->
- ```
- scope ::= `"Workgroup"` | `"Subgroup"`
- operation ::= `"Reduce"` | `"InclusiveScan"` | `"ExclusiveScan"` | ...
- float-scalar-vector-type ::= float-type |
- `vector<` integer-literal `x` float-type `>`
- non-uniform-fmul-op ::= ssa-id `=` `spirv.GroupNonUniformFMul` scope operation
- ssa-use ( `cluster_size` `(` ssa_use `)` )?
- `:` float-scalar-vector-type
- ```
-
#### Example:
```mlir
%four = spirv.Constant 4 : i32
%scalar = ... : f32
%vector = ... : vector<4xf32>
- %0 = spirv.GroupNonUniformFMul "Workgroup" "Reduce" %scalar : f32
- %1 = spirv.GroupNonUniformFMul "Subgroup" "ClusteredReduce" %vector cluster_size(%four) : vector<4xf32>
+ %0 = spirv.GroupNonUniformFMul <Workgroup> <Reduce> %scalar : f32 -> f32
+ %1 = spirv.GroupNonUniformFMul <Subgroup> <ClusteredReduce> %vector cluster_size(%four) : vector<4xf32>, i32 -> vector<4xf32>
```
}];
@@ -550,24 +516,14 @@ def SPIRV_GroupNonUniformIAddOp : SPIRV_GroupNonUniformArithmeticOp<"GroupNonUni
<!-- End of AutoGen section -->
- ```
- scope ::= `"Workgroup"` | `"Subgroup"`
- operation ::= `"Reduce"` | `"InclusiveScan"` | `"ExclusiveScan"` | ...
- integer-scalar-vector-type ::= integer-type |
- `vector<` integer-literal `x` integer-type `>`
- non-uniform-iadd-op ::= ssa-id `=` `spirv.GroupNonUniformIAdd` scope operation
- ssa-use ( `cluster_size` `(` ssa_use `)` )?
- `:` integer-scalar-vector-type
- ```
-
#### Example:
```mlir
%four = spirv.Constant 4 : i32
%scalar = ... : i32
%vector = ... : vector<4xi32>
- %0 = spirv.GroupNonUniformIAdd "Workgroup" "Reduce" %scalar : i32
- %1 = spirv.GroupNonUniformIAdd "Subgroup" "ClusteredReduce" %vector cluster_size(%four) : vector<4xi32>
+ %0 = spirv.GroupNonUniformIAdd <Workgroup> <Reduce> %scalar : i32 -> i32
+ %1 = spirv.GroupNonUniformIAdd <Subgroup> <ClusteredReduce> %vector cluster_size(%four) : vector<4xi32>, i32 -> vector<4xi32>
```
}];
@@ -605,24 +561,14 @@ def SPIRV_GroupNonUniformIMulOp : SPIRV_GroupNonUniformArithmeticOp<"GroupNonUni
<!-- End of AutoGen section -->
- ```
- scope ::= `"Workgroup"` | `"Subgroup"`
- operation ::= `"Reduce"` | `"InclusiveScan"` | `"ExclusiveScan"` | ...
- integer-scalar-vector-type ::= integer-type |
- `vector<` integer-literal `x` integer-type `>`
- non-uniform-imul-op ::= ssa-id `=` `spirv.GroupNonUniformIMul` scope operation
- ssa-use ( `cluster_size` `(` ssa_use `)` )?
- `:` integer-scalar-vector-type
- ```
-
#### Example:
```mlir
%four = spirv.Constant 4 : i32
%scalar = ... : i32
%vector = ... : vector<4xi32>
- %0 = spirv.GroupNonUniformIMul "Workgroup" "Reduce" %scalar : i32
- %1 = spirv.GroupNonUniformIMul "Subgroup" "ClusteredReduce" %vector cluster_size(%four) : vector<4xi32>
+ %0 = spirv.GroupNonUniformIMul <Workgroup> <Reduce> %scalar : i32 -> i32
+ %1 = spirv.GroupNonUniformIMul <Subgroup> <ClusteredReduce> %vector cluster_size(%four) : vector<4xi32>, i32 -> vector<4xi32>
```
}];
@@ -662,24 +608,14 @@ def SPIRV_GroupNonUniformSMaxOp : SPIRV_GroupNonUniformArithmeticOp<"GroupNonUni
<!-- End of AutoGen section -->
- ```
- scope ::= `"Workgroup"` | `"Subgroup"`
- operation ::= `"Reduce"` | `"InclusiveScan"` | `"ExclusiveScan"` | ...
- integer-scalar-vector-type ::= integer-type |
- `vector<` integer-literal `x` integer-type `>`
- non-uniform-smax-op ::= ssa-id `=` `spirv.GroupNonUniformSMax` scope operation
- ssa-use ( `cluster_size` `(` ssa_use `)` )?
- `:` integer-scalar-vector-type
- ```
-
#### Example:
```mlir
%four = spirv.Constant 4 : i32
%scalar = ... : i32
%vector = ... : vector<4xi32>
- %0 = spirv.GroupNonUniformSMax "Workgroup" "Reduce" %scalar : i32
- %1 = spirv.GroupNonUniformSMax "Subgroup" "ClusteredReduce" %vector cluster_size(%four) : vector<4xi32>
+ %0 = spirv.GroupNonUniformSMax <Workgroup> <Reduce> %scalar : i32 -> i32
+ %1 = spirv.GroupNonUniformSMax <Subgroup> <ClusteredReduce> %vector cluster_size(%four) : vector<4xi32>, i32 -> vector<4xi32>
```
}];
@@ -719,24 +655,14 @@ def SPIRV_GroupNonUniformSMinOp : SPIRV_GroupNonUniformArithmeticOp<"GroupNonUni
<!-- End of AutoGen section -->
- ```
- scope ::= `"Workgroup"` | `"Subgroup"`
- operation ::= `"Reduce"` | `"InclusiveScan"` | `"ExclusiveScan"` | ...
- integer-scalar-vector-type ::= integer-type |
- `vector<` integer-literal `x` integer-type `>`
- non-uniform-smin-op ::= ssa-id `=` `spirv.GroupNonUniformSMin` scope operation
- ssa-use ( `cluster_size` `(` ssa_use `)` )?
- `:` integer-scalar-vector-type
- ```
-
#### Example:
```mlir
%four = spirv.Constant 4 : i32
%scalar = ... : i32
%vector = ... : vector<4xi32>
- %0 = spirv.GroupNonUniformSMin "Workgroup" "Reduce" %scalar : i32
- %1 = spirv.GroupNonUniformSMin "Subgroup" "ClusteredReduce" %vector cluster_size(%four) : vector<4xi32>
+ %0 = spirv.GroupNonUniformSMin <Workgroup> <Reduce> %scalar : i32 -> i32
+ %1 = spirv.GroupNonUniformSMin <Subgroup> <ClusteredReduce> %vector cluster_size(%four) : vector<4xi32>, i32 -> vector<4xi32>
```
}];
@@ -992,24 +918,14 @@ def SPIRV_GroupNonUniformUMaxOp : SPIRV_GroupNonUniformArithmeticOp<"GroupNonUni
<!-- End of AutoGen section -->
- ```
- scope ::= `"Workgroup"` | `"Subgroup"`
- operation ::= `"Reduce"` | `"InclusiveScan"` | `"ExclusiveScan"` | ...
- integer-scalar-vector-type ::= integer-type |
- `vector<` integer-literal `x` integer-type `>`
- non-uniform-umax-op ::= ssa-id `=` `spirv.GroupNonUniformUMax` scope operation
- ssa-use ( `cluster_size` `(` ssa_use `)` )?
- `:` integer-scalar-vector-type
- ```
-
#### Example:
```mlir
%four = spirv.Constant 4 : i32
%scalar = ... : i32
%vector = ... : vector<4xi32>
- %0 = spirv.GroupNonUniformUMax "Workgroup" "Reduce" %scalar : i32
- %1 = spirv.GroupNonUniformUMax "Subgroup" "ClusteredReduce" %vector cluster_size(%four) : vector<4xi32>
+ %0 = spirv.GroupNonUniformUMax <Workgroup> <Reduce> %scalar : i32 -> i32
+ %1 = spirv.GroupNonUniformUMax <Subgroup> <ClusteredReduce> %vector cluster_size(%four) : vector<4xi32>, i32 -> vector<4xi32>
```
}];
@@ -1050,24 +966,14 @@ def SPIRV_GroupNonUniformUMinOp : SPIRV_GroupNonUniformArithmeticOp<"GroupNonUni
<!-- End of AutoGen section -->
- ```
- scope ::= `"Workgroup"` | `"Subgroup"`
- operation ::= `"Reduce"` | `"InclusiveScan"` | `"ExclusiveScan"` | ...
- integer-scalar-vector-type ::= integer-type |
- `vector<` integer-literal `x` integer-type `>`
- non-uniform-umin-op ::= ssa-id `=` `spirv.GroupNonUniformUMin` scope operation
- ssa-use ( `cluster_size` `(` ssa_use `)` )?
- `:` integer-scalar-vector-type
- ```
-
#### Example:
```mlir
%four = spirv.Constant 4 : i32
%scalar = ... : i32
%vector = ... : vector<4xi32>
- %0 = spirv.GroupNonUniformUMin "Workgroup" "Reduce" %scalar : i32
- %1 = spirv.GroupNonUniformUMin "Subgroup" "ClusteredReduce" %vector cluster_size(%four) : vector<4xi32>
+ %0 = spirv.GroupNonUniformUMin <Workgroup> <Reduce> %scalar : i32 -> i32
+ %1 = spirv.GroupNonUniformUMin <Subgroup> <ClusteredReduce> %vector cluster_size(%four) : vector<4xi32>, i32 -> vector<4xi32>
```
}];
@@ -1113,9 +1019,9 @@ def SPIRV_GroupNonUniformBitwiseAndOp :
%four = spirv.Constant 4 : i32
%scalar = ... : i32
%vector = ... : vector<4xi32>
- %0 = spirv.GroupNonUniformBitwiseAnd "Workgroup" "Reduce" %scalar : i32
- %1 = spirv.GroupNonUniformBitwiseAnd "Subgroup" "ClusteredReduce"
- %vector cluster_size(%four) : vector<4xi32>
+ %0 = spirv.GroupNonUniformBitwiseAnd <Workgroup> <Reduce> %scalar : i32 -> i32
+ %1 = spirv.GroupNonUniformBitwiseAnd <Subgroup> <ClusteredReduce>
+ %vector cluster_size(%four) : vector<4xi32>, i32 -> vector<4xi32>
```
}];
@@ -1163,9 +1069,9 @@ def SPIRV_GroupNonUniformBitwiseOrOp :
%four = spirv.Constant 4 : i32
%scalar = ... : i32
%vector = ... : vector<4xi32>
- %0 = spirv.GroupNonUniformBitwiseOr "Workgroup" "Reduce" %scalar : i32
- %1 = spirv.GroupNonUniformBitwiseOr "Subgroup" "ClusteredReduce"
- %vector cluster_size(%four) : vector<4xi32>
+ %0 = spirv.GroupNonUniformBitwiseOr <Workgroup> <Reduce> %scalar : i32 -> i32
+ %1 = spirv.GroupNonUniformBitwiseOr <Subgroup> <ClusteredReduce>
+ %vector cluster_size(%four) : vector<4xi32>, i32 -> vector<4xi32>
```
}];
@@ -1213,9 +1119,9 @@ def SPIRV_GroupNonUniformBitwiseXorOp :
%four = spirv.Constant 4 : i32
%scalar = ... : i32
%vector = ... : vector<4xi32>
- %0 = spirv.GroupNonUniformBitwiseXor "Workgroup" "Reduce" %scalar : i32
- %1 = spirv.GroupNonUniformBitwiseXor "Subgroup" "ClusteredReduce"
- %vector cluster_size(%four) : vector<4xi32>
+ %0 = spirv.GroupNonUniformBitwiseXor <Workgroup> <Reduce> %scalar : i32 -> i32
+ %1 = spirv.GroupNonUniformBitwiseXor <Subgroup> <ClusteredReduce>
+ %vector cluster_size(%four) : vector<4xi32>, i32 -> vector<4xi32>
```
}];
@@ -1263,9 +1169,9 @@ def SPIRV_GroupNonUniformLogicalAndOp :
%four = spirv.Constant 4 : i32
%scalar = ... : i1
%vector = ... : vector<4xi1>
- %0 = spirv.GroupNonUniformLogicalAnd "Workgroup" "Reduce" %scalar : i1
- %1 = spirv.GroupNonUniformLogicalAnd "Subgroup" "ClusteredReduce"
- %vector cluster_size(%four) : vector<4xi1>
+ %0 = spirv.GroupNonUniformLogicalAnd <Workgroup> <Reduce> %scalar : i1 -> i1
+ %1 = spirv.GroupNonUniformLogicalAnd <Subgroup> <ClusteredReduce>
+ %vector cluster_size(%four) : vector<4xi1>, i32 -> vector<4xi1>
```
}];
@@ -1313,9 +1219,9 @@ def SPIRV_GroupNonUniformLogicalOrOp :
%four = spirv.Constant 4 : i32
%scalar = ... : i1
%vector = ... : vector<4xi1>
- %0 = spirv.GroupNonUniformLogicalOr "Workgroup" "Reduce" %scalar : i1
- %1 = spirv.GroupNonUniformLogicalOr "Subgroup" "ClusteredReduce"
- %vector cluster_size(%four) : vector<4xi1>
+ %0 = spirv.GroupNonUniformLogicalOr <Workgroup> <Reduce> %scalar : i1 -> i1
+ %1 = spirv.GroupNonUniformLogicalOr <Subgroup> <ClusteredReduce>
+ %vector cluster_size(%four) : vector<4xi1>, i32 -> vector<4xi1>
```
}];
@@ -1363,9 +1269,9 @@ def SPIRV_GroupNonUniformLogicalXorOp :
%four = spirv.Constant 4 : i32
%scalar = ... : i1
%vector = ... : vector<4xi1>
- %0 = spirv.GroupNonUniformLogicalXor "Workgroup" "Reduce" %scalar : i1
- %1 = spirv.GroupNonUniformLogicalXor "Subgroup" "ClusteredReduce"
- %vector cluster_size(%four) : vector<4xi>
+ %0 = spirv.GroupNonUniformLogicalXor <Workgroup> <Reduce> %scalar : i1 -> i1
+ %1 = spirv.GroupNonUniformLogicalXor <Subgroup> <ClusteredReduce>
+ %vector cluster_size(%four) : vector<4xi1>, i32 -> vector<4xi1>
```
}];
diff --git a/mlir/lib/Dialect/SPIRV/IR/GroupOps.cpp b/mlir/lib/Dialect/SPIRV/IR/GroupOps.cpp
index 2e5a2aab52a160..8aeafda0eb755a 100644
--- a/mlir/lib/Dialect/SPIRV/IR/GroupOps.cpp
+++ b/mlir/lib/Dialect/SPIRV/IR/GroupOps.cpp
@@ -20,70 +20,6 @@ using namespace mlir::spirv::AttrNames;
namespace mlir::spirv {
-template <typename OpTy>
-static ParseResult parseGroupNonUniformArithmeticOp(OpAsmParser &parser,
- OperationState &state) {
- spirv::Scope executionScope;
- GroupOperation groupOperation;
- OpAsmParser::UnresolvedOperand valueInfo;
- if (spirv::parseEnumStrAttr<spirv::ScopeAttr>(
- executionScope, parser, state,
- OpTy::getExecutionScopeAttrName(state.name)) ||
- spirv::parseEnumStrAttr<GroupOperationAttr>(
- groupOperation, parser, state,
- OpTy::getGroupOperationAttrName(state.name)) ||
- parser.parseOperand(valueInfo))
- return failure();
-
- std::optional<OpAsmParser::UnresolvedOperand> clusterSizeInfo;
- if (succeeded(parser.parseOptionalKeyword(kClusterSize))) {
- clusterSizeInfo = OpAsmParser::UnresolvedOperand();
- if (parser.parseLParen() || parser.parseOperand(*clusterSizeInfo) ||
- parser.parseRParen())
- return failure();
- }
-
- Type resultType;
- if (parser.parseColonType(resultType))
- return failure();
-
- if (parser.resolveOperand(valueInfo, resultType, state.operands))
- return failure();
-
- if (clusterSizeInfo) {
- Type i32Type = parser.getBuilder().getIntegerType(32);
- if (parser.resolveOperand(*clusterSizeInfo, i32Type, state.operands))
- return failure();
- }
-
- return parser.addTypeToList(resultType, state.types);
-}
-
-template <typename GroupNonUniformArithmeticOpTy>
-static void printGroupNonUniformArithmeticOp(Operation *groupOp,
- OpAsmPrinter &printer) {
- printer
- << " \""
- << stringifyScope(
- groupOp
- ->getAttrOfType<spirv::ScopeAttr>(
- GroupNonUniformArithmeticOpTy::getExecutionScopeAttrName(
- groupOp->getName()))
- .getValue())
- << "\" \""
- << stringifyGroupOperation(
- groupOp
- ->getAttrOfType<GroupOperationAttr>(
- GroupNonUniformArithmeticOpTy::getGroupOperationAttrName(
- groupOp->getName()))
- .getValue())
- << "\" " << groupOp->getOperand(0);
-
- if (groupOp->getNumOperands() > 1)
- printer << " " << kClusterSize << '(' << groupOp->getOperand(1) << ')';
- printer << " : " << groupOp->getResult(0).getType();
-}
-
template <typename OpTy>
static LogicalResult verifyGroupNonUniformArithmeticOp(Operation *groupOp) {
spirv::Scope scope =
@@ -248,16 +184,6 @@ LogicalResult GroupNonUniformFAddOp::verify() {
return verifyGroupNonUniformArithmeticOp<GroupNonUniformFAddOp>(*this);
}
-ParseResult GroupNonUniformFAddOp::parse(OpAsmParser &parser,
- OperationState &result) {
- return parseGroupNonUniformArithmeticOp<GroupNonUniformFAddOp>(parser,
- result);
-}
-
-void GroupNonUniformFAddOp::print(OpAsmPrinter &p) {
- printGroupNonUniformArithmeticOp<GroupNonUniformFAddOp>(*this, p);
-}
-
//===----------------------------------------------------------------------===//
// spirv.GroupNonUniformFMaxOp
//===----------------------------------------------------------------------===//
@@ -266,16 +192,6 @@ LogicalResult GroupNonUniformFMaxOp::verify() {
return verifyGroupNonUniformArithmeticOp<GroupNonUniformFMaxOp>(*this);
}
-ParseResult GroupNonUniformFMaxOp::parse(OpAsmParser &parser,
- OperationState &result) {
...
[truncated]